Knowledge graphs (KGs) serve as a powerful tool for distilling information from large natural-language text corpora. Natural language question answering over knowledge graphs is essential for human consumption of this information. The problem is typically solved by converting the natural language query into a structured query and then firing the structured query against the KG. Direct answering models over knowledge graphs are rare in the literature. Both the query-conversion models and the direct models require specific training data pertaining to the domain of the knowledge graph. In this work, we convert natural language questions over knowledge graphs into an inference problem over premise-hypothesis pairs. Using trained deep learning models for the converted proxy inference problem, we provide a solution for the original natural language question answering problem. Our method achieves over 90% accuracy on the MetaQA dataset, beating the existing state of the art. We also propose an inference model called the Hierarchical Recurrent Path Encoder (HRPE). The inference models can be fine-tuned for use across domains with limited training data. Our approach does not require large domain-specific training data for querying new knowledge graphs from different domains.
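A minimal sketch of the recasting step described above (not the paper's HRPE model): KG facts along a candidate path are verbalized into a premise, the question with a candidate answer filled in becomes the hypothesis, and an off-the-shelf NLI model scores entailment. The model name, the fact-verbalization scheme, and the toy MetaQA-style example are illustrative assumptions.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # placeholder off-the-shelf NLI model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

def verbalize_path(path):
    """Turn a KG path [(head, relation, tail), ...] into a textual premise."""
    return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in path)

def entailment_score(premise, hypothesis):
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Label order for this checkpoint: contradiction, neutral, entailment
    return torch.softmax(logits, dim=-1)[0, 2].item()

def answer(question_template, candidate_paths):
    """candidate_paths: {candidate_answer: KG path supporting that candidate}."""
    scores = {c: entailment_score(verbalize_path(p), question_template.format(answer=c))
              for c, p in candidate_paths.items()}
    return max(scores, key=scores.get), scores

# Toy example (entities, relations, and template are made up):
paths = {
    "Christopher Nolan": [("Inception", "directed_by", "Christopher Nolan")],
    "Emma Thomas": [("Inception", "produced_by", "Emma Thomas")],
}
best, _ = answer("The film Inception was directed by {answer}.", paths)
print(best)
```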
Deep learning models have set benchmark results in various natural language processing tasks. However, these models require enormous amounts of training data, which is infeasible in many practical problems. While various techniques such as domain adaptation and few-shot learning address this problem, we introduce a new technique of actively infusing external knowledge into learning to solve the low-data-regime problem. We propose a technique called ActKnow that actively infuses knowledge from knowledge graphs (KGs) "on demand" into learning for question answering (QA). By injecting world knowledge from ConceptNet, we show significant improvements over purely text-based transformer models on the ARC Challenge in the low-data regime. For example, using only 20% of the training examples, we demonstrate a 4% improvement in accuracy on the ARC Challenge and OpenBookQA, respectively.
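A rough sketch in the spirit of the "on-demand" knowledge injection described above (not the authors' ActKnow implementation): short ConceptNet facts about salient question terms are retrieved and prepended to the context seen by a text-only transformer QA model. The public ConceptNet endpoint usage, the extractive QA model, and the example are assumptions made for illustration.

```python
import requests
from transformers import pipeline

def conceptnet_facts(term, limit=5):
    """Fetch short English facts about `term` from the public ConceptNet API."""
    url = f"https://api.conceptnet.io/c/en/{term.lower().replace(' ', '_')}"
    edges = requests.get(url, params={"limit": limit}).json().get("edges", [])
    facts = []
    for e in edges:
        text = e.get("surfaceText")
        if text:
            facts.append(text.replace("[", "").replace("]", ""))
    return facts

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def answer_with_knowledge(question, context, key_terms):
    # Inject world knowledge "on demand" by expanding the context with KG facts.
    knowledge = " ".join(f for t in key_terms for f in conceptnet_facts(t))
    return qa(question=question, context=knowledge + " " + context)

print(answer_with_knowledge(
    question="What do plants need to make food?",
    context="Photosynthesis happens in leaves.",
    key_terms=["photosynthesis"],
))
```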
A "heart attack", or myocardial infarction (MI), occurs when an artery supplying blood to the heart is abruptly occluded. The "gold standard" method for imaging MI is Cardiovascular Magnetic Resonance Imaging (MRI) with intravenously administered gadolinium-based contrast (late gadolinium enhancement). However, no "gold standard" fully automated method for the quantification of MI exists. In this work, we propose an end-to-end fully automatic system (MyI-Net) for the detection and quantification of MI in MRI images. This has the potential to reduce the uncertainty due to technical variability across labs and inherent problems of the data and labels. Our system consists of four processing stages designed to maintain the flow of information across scales. First, features from raw MRI images are generated using feature extractors built on ResNet and MobileNet architectures. This is followed by Atrous Spatial Pyramid Pooling (ASPP) to produce spatial information at different scales and preserve more image context. High-level features from ASPP and initial low-level features are concatenated at the third stage and then passed to the fourth stage, where spatial information is recovered via up-sampling to produce the final image segmentation output into: i) background, ii) heart muscle, iii) blood and iv) scar areas. The new models were compared with state-of-the-art models and manual quantification. Our models showed favorable performance in global segmentation and scar tissue detection relative to state-of-the-art work, including a four-fold better performance in matching scar pixels to contours produced by clinicians.
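A compact sketch of the four-stage flow described above (an assumption-laden illustration, not the authors' MyI-Net code): backbone features, ASPP at multiple dilation rates, concatenation with low-level features, and upsampling to a 4-class map (background, myocardium, blood pool, scar). Backbone choice, channel sizes, and dilation rates are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3 if r > 1 else 1,
                      padding=r if r > 1 else 0, dilation=r, bias=False)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Parallel atrous branches capture context at several spatial scales.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class MISegNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                                  backbone.maxpool, backbone.layer1)   # low-level features
        self.deep = nn.Sequential(backbone.layer2, backbone.layer3)    # high-level features
        self.aspp = ASPP(1024)
        self.low_proj = nn.Conv2d(256, 48, 1)
        self.head = nn.Sequential(nn.Conv2d(256 + 48, 256, 3, padding=1),
                                  nn.ReLU(inplace=True),
                                  nn.Conv2d(256, n_classes, 1))

    def forward(self, x):
        low = self.stem(x)                        # stage 1: features from raw images
        high = self.aspp(self.deep(low))          # stage 2: multi-scale context (ASPP)
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        fused = torch.cat([self.low_proj(low), high], dim=1)   # stage 3: concatenation
        out = self.head(fused)                    # stage 4: recover spatial resolution
        return F.interpolate(out, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

logits = MISegNet()(torch.randn(1, 3, 224, 224))  # -> shape (1, 4, 224, 224)
```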
In the Earth's magnetosphere, there are fewer than a dozen dedicated probes beyond low-Earth orbit making in-situ observations at any given time. As a result, we poorly understand its global structure and evolution, the mechanisms of its main activity processes, magnetic storms, and substorms. New Artificial Intelligence (AI) methods, including machine learning, data mining, and data assimilation, as well as new AI-enabled missions will need to be developed to meet this Sparse Data challenge.
In this paper, a complete framework for Autonomous Self Driving is implemented. LIDAR, Camera and IMU sensors are used together. The entire data communication is managed using the Robot Operating System (ROS), which provides a robust platform for the implementation of robotics projects. A Jetson Nano is used to provide powerful on-board processing capabilities. Sensor fusion is performed on the data received from the different sensors to improve the accuracy of the decision making and inferences that we derive from the data. This data is then used to create a localized map of the environment. In this step, the position of the vehicle is obtained with respect to the map built using the sensor data. The SLAM techniques used for this purpose are Hector Mapping and GMapping, which are widely used mapping techniques in ROS. Apart from SLAM, which primarily uses LIDAR data, Visual Odometry is implemented using a Monocular Camera. The sensor-fused data is then used by Adaptive Monte Carlo Localization for car localization. Using the localized map developed, path planning techniques like the "TEB planner" and the "Dynamic Window Approach" are implemented for autonomous navigation of the vehicle. The last step in the project is the implementation of Control, the final decision-making block in the pipeline, which gives speed and steering data for navigation compatible with Ackermann Kinematics. The implementation of such a control block under a ROS framework using the three sensors, viz. LIDAR, Camera and IMU, is the novel approach undertaken in this project.
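A minimal sketch of the final control stage described above (not the project's actual node): planner velocity commands, such as those published by the TEB or DWA local planners, are converted into speed and steering compatible with Ackermann kinematics via a bicycle-model approximation. Topic names, the wheelbase value, and the conversion formula are assumptions for illustration.

```python
#!/usr/bin/env python
import math
import rospy
from geometry_msgs.msg import Twist
from ackermann_msgs.msg import AckermannDriveStamped

WHEELBASE = 0.33  # metres; hypothetical value for a small test platform

def cmd_vel_to_ackermann(msg, pub):
    drive = AckermannDriveStamped()
    drive.header.stamp = rospy.Time.now()
    drive.drive.speed = msg.linear.x
    # Bicycle-model approximation: steering angle delta = atan(L * omega / v)
    if abs(msg.linear.x) > 1e-3:
        drive.drive.steering_angle = math.atan(WHEELBASE * msg.angular.z / msg.linear.x)
    else:
        drive.drive.steering_angle = 0.0
    pub.publish(drive)

if __name__ == "__main__":
    rospy.init_node("cmd_vel_to_ackermann")
    pub = rospy.Publisher("/ackermann_cmd", AckermannDriveStamped, queue_size=1)
    rospy.Subscriber("/cmd_vel", Twist, cmd_vel_to_ackermann, callback_args=pub)
    rospy.spin()
```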
Verifying the input-output relationships of a neural network so as to achieve some desired performance specification is a difficult, yet important, problem due to the growing ubiquity of neural nets in many engineering applications. We use ideas from probability theory in the frequency domain to provide probabilistic verification guarantees for ReLU neural networks. Specifically, we interpret a (deep) feedforward neural network as a discrete dynamical system over a finite horizon that shapes distributions of initial states, and use characteristic functions to propagate the distribution of the input data through the network. Using the inverse Fourier transform, we obtain the corresponding cumulative distribution function of the output set, which can be used to check if the network is performing as expected given any random point from the input set. The proposed approach does not require distributions to have well-defined moments or moment generating functions. We demonstrate our proposed approach on two examples, and compare its performance to related approaches.
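A small numerical sketch of the final inversion step described above (not the paper's full characteristic-function propagation through a network): given the characteristic function phi of a scalar output, its CDF is recovered via the Gil-Pelaez formula F(x) = 1/2 - (1/pi) \int_0^\infty Im[e^{-itx} phi(t)] / t dt and compared against a Monte Carlo estimate. The toy single-layer output with a Gaussian closed-form phi is an assumption for illustration.

```python
import numpy as np

def gil_pelaez_cdf(phi, x, t_max=100.0, n=200000):
    """Numerically invert a characteristic function to evaluate the CDF at x."""
    t = np.linspace(1e-8, t_max, n)      # skip t = 0 to avoid the singularity
    integrand = np.imag(np.exp(-1j * t * x) * phi(t)) / t
    return 0.5 - integrand.sum() * (t[1] - t[0]) / np.pi

# Toy one-layer pre-activation: Y = w*s + b with s ~ N(0, 1), so Y ~ N(b, w^2)
# and its characteristic function is available in closed form.
w, b = 2.0, -0.5
phi_y = lambda t: np.exp(1j * b * t - 0.5 * (w * t) ** 2)

rng = np.random.default_rng(0)
samples = w * rng.standard_normal(1_000_000) + b
for x in (-1.0, 0.0, 1.5):
    print(f"P(Y <= {x:+.1f}): inversion {gil_pelaez_cdf(phi_y, x):.4f}"
          f"  Monte Carlo {(samples <= x).mean():.4f}")
```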
We provide a brief, and inevitably incomplete, overview of the use of Machine Learning (ML) and other AI methods in astronomy, astrophysics, and cosmology. Astronomy entered the big data era with the first digital sky surveys in the early 1990s and the resulting Terascale data sets, which required the automation of many data processing and analysis tasks, for example star-galaxy separation, with billions of feature vectors in hundreds of dimensions. The exponential data growth continued with the rise of synoptic sky surveys and Time Domain Astronomy, bringing Petascale data streams and the need for real-time processing, classification, and decision making. A broad variety of classification and clustering methods have been applied to these tasks, and this remains a very active area of research. Over the past decade we have seen an exponential growth of the astronomical literature involving a variety of ML/AI applications of ever increasing complexity and sophistication. ML and AI are now a standard part of the astronomical toolkit. As the data complexity continues to increase, we anticipate further advances leading towards collaborative human-AI discovery.
Recent model-based attacks on image classifiers have overwhelmingly focused on single-object (i.e., single dominant object) images. Different from such settings, we tackle the more practical problem of crafting adversarial perturbations using multi-object (i.e., multiple dominant objects) images, since they represent most real-world scenes. Our goal is to design an attack strategy that can learn from such natural scenes by leveraging the local patch differences that inherently exist in such images (e.g., the difference between a local patch on the object 'person' and a local patch on the object 'bike' in a traffic scene). Our key idea is that, for an adversarial multi-object image to be misclassified, every local patch in the image should confuse the victim classifier. Based on this, we propose a novel generative attack (called Local Patch Difference, or LPD-Attack) in which a novel contrastive loss function uses the aforementioned local differences in the feature space of multi-object scenes to optimize the perturbation generator. Through extensive experiments across diverse victim convolutional neural networks, we show that our approach outperforms baseline generative attacks with highly transferable perturbations when evaluated under different white-box and black-box settings.
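A schematic sketch of the core idea (our reading, not the authors' released code or exact loss): every local patch of the perturbed image should confuse the victim, so the generator is optimized with a patch-wise loss that pushes perturbed-patch features away from the corresponding clean-patch features in the victim's feature space. The patch size, the `victim_features` handle, and the loss form are assumptions.

```python
import torch
import torch.nn.functional as F

def patches(x, size=56):
    """Split a batch of images (B, C, H, W) into non-overlapping local patches."""
    b, c, h, w = x.shape
    p = x.unfold(2, size, size).unfold(3, size, size)   # B, C, nh, nw, size, size
    return p.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, size, size)

def lpd_style_loss(victim_features, clean, adv, size=56):
    """Push each adversarial patch away from its clean counterpart."""
    f_clean = victim_features(patches(clean, size)).flatten(1)
    f_adv = victim_features(patches(adv, size)).flatten(1)
    # Maximize the per-patch feature discrepancy (i.e. minimize its negative).
    return -(1.0 - F.cosine_similarity(f_adv, f_clean.detach(), dim=1)).mean()

# Usage inside a generator training step (generator G, perturbation budget eps):
#   adv = torch.clamp(clean + eps * torch.tanh(G(clean)), 0, 1)
#   loss = lpd_style_loss(victim.features, clean, adv)
#   loss.backward(); optimizer.step()
```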
Optimization-based meta-learning aims to learn an initialization such that a new, unseen task can be learned within a few gradient updates. Model-Agnostic Meta-Learning (MAML) is a benchmark algorithm comprising two optimization loops: the inner loop is dedicated to learning a new task, and the outer loop leads to the meta-initialization. However, the ANIL (Almost No Inner Loop) algorithm shows that feature reuse is an alternative to rapid learning in MAML; the meta-initialization phase primes MAML for feature reuse and removes the need for rapid learning. Contrary to ANIL, we hypothesize that new features may still need to be learned during meta-testing. A new, unseen task from a non-similar distribution would require rapid learning in addition to the reuse of existing features. In this paper, we invoke the width-depth duality of neural networks, whereby we increase the width of the network by adding additional computational units (ACUs). The ACUs enable the learning of new atomic features in the meta-testing task, and the associated increase in width aids information propagation in the forward pass. The newly learned features are combined with the existing features in the last layer for meta-learning. Experimental results show that our proposed MAC method outperforms the existing ANIL algorithm on non-similar task distributions by approximately 13% (in the 5-shot task setting).
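An illustrative sketch of the width-extension idea (our interpretation, not the authors' code): small additional computational units (ACUs) run alongside a frozen meta-trained backbone, their outputs are concatenated with the backbone's last-layer features before the classifier head, and during meta-testing only the ACUs and head take inner-loop gradient steps, so new atomic features can be learned while existing features are reused. Layer sizes, the inner-loop schedule, and the toy data are assumptions.

```python
import torch
import torch.nn as nn

class MACNet(nn.Module):
    def __init__(self, backbone, feat_dim, acu_dim=64, n_classes=5, in_dim=784):
        super().__init__()
        self.backbone = backbone                                   # meta-trained, reused
        self.acu = nn.Sequential(nn.Linear(in_dim, acu_dim), nn.ReLU())  # extra width
        self.head = nn.Linear(feat_dim + acu_dim, n_classes)

    def forward(self, x):
        flat = x.flatten(1)
        # Combine reused backbone features with newly learnable ACU features.
        feats = torch.cat([self.backbone(flat), self.acu(flat)], dim=1)
        return self.head(feats)

def meta_test_adapt(model, support_x, support_y, steps=5, lr=1e-2):
    """Inner loop at meta-test time: adapt only the ACUs and the head."""
    for p in model.backbone.parameters():
        p.requires_grad_(False)                   # feature reuse, as in ANIL
    opt = torch.optim.SGD(list(model.acu.parameters()) +
                          list(model.head.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(support_x), support_y).backward()
        opt.step()
    return model

# Toy usage with a random backbone and a 5-way, 5-shot support set:
backbone = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
model = MACNet(backbone, feat_dim=128)
model = meta_test_adapt(model, torch.randn(25, 1, 28, 28), torch.randint(0, 5, (25,)))
```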
Most methods for crafting adversarial attacks have focused on scenes with a single dominant object (e.g., images from ImageNet). Natural scenes, on the other hand, contain multiple dominant objects that are semantically related. It is therefore crucial to explore attack strategies that look beyond learning on single-object scenes or attacking single-object victim classifiers. Owing to the inherently strong transferability of their perturbations to unknown models, this paper presents the first approach that uses generative models for adversarial attacks on multi-object scenes. To represent the relationships between the different objects in an input scene, we leverage the open-source, pre-trained vision-language model CLIP (Contrastive Language-Image Pre-training), motivated by exploiting the semantics encoded in the language space together with the visual space. We call this attack approach Generative Adversarial Multi-object scene Attacks (GAMA). GAMA demonstrates the utility of the CLIP model as an attacker's tool to train formidable perturbation generators for multi-object scenes. Using joint image-text features to train the generator, we show that GAMA can craft potent, transferable perturbations that fool victim classifiers in various attack settings. For example, GAMA triggers roughly 16% more misclassification than state-of-the-art generative approaches in the black-box setting, where both the classifier architecture and the attacker's data distribution differ from the victim's. Our code will be made publicly available soon.
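A rough sketch of a CLIP-guided objective in the spirit of GAMA (not the authors' released implementation or exact loss): the perturbed image is pushed away from a joint image-text embedding of the clean scene in CLIP space, so that semantics shared across the scene's multiple objects are disrupted. The caption template, the anchor construction, and the generator usage are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.eval()

def gama_style_loss(clean, adv, object_names):
    """clean/adv: CLIP-preprocessed image batches; object_names: objects per image."""
    captions = [f"a photo of {', '.join(objs)}" for objs in object_names]
    with torch.no_grad():
        text_f = F.normalize(clip_model.encode_text(clip.tokenize(captions).to(device)), dim=-1)
        clean_f = F.normalize(clip_model.encode_image(clean), dim=-1)
        anchor = F.normalize(clean_f + text_f, dim=-1)    # joint image-text anchor
    adv_f = F.normalize(clip_model.encode_image(adv), dim=-1)
    # Minimizing this cosine similarity drives adversarial embeddings away
    # from the joint anchor of the clean multi-object scene.
    return (adv_f * anchor).sum(dim=-1).mean()

# Inside a generator step:  adv = clamp(clean + eps * tanh(G(clean)), 0, 1)
#                           loss = gama_style_loss(clean, adv, names); loss.backward()
```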